Gay People


Grindr Goes 'AI-First' as It Strives to Be an 'Everything App for the Gay Guy'

WIRED

After controlling shareholders failed to take Grindr private, and amid controversies over data and the banning of the phrase "No Zionists," Grindr's CEO opens up about AI, privacy, and big expansion plans. Every Grindr user is unique. South Koreans prefer open relationships. The highest percentage of self-proclaimed "daddies" call the US home, and Switzerland is overrun with twinks. Delivered by Grindr Unwrapped, the company's annual trend report, those insights offer the kind of information that will help usher the company into its "AI-first" era as "the everything app for the gay guy," CEO George Arison tells WIRED. Grindr was the first to leverage geolocation tech when it burst onto the scene in 2009. Arison arrived at the company in 2022 from the world of automotive ecommerce.


Exploring LGBTQ+ Bias in Generative AI Answers across Different Country and Religious Contexts

Vicsek, Lilla, Vancsó, Anna, Zajko, Mike, Takacs, Judit

arXiv.org Artificial Intelligence

Previous discussions have highlighted the need for generative AI tools to become more culturally sensitive, yet they often neglect the complexities of handling content about minorities, who are perceived differently across cultures and religions. Our study examined how two generative AI systems respond to homophobic statements accompanied by varying cultural and religious context information. Findings showed that ChatGPT 3.5's replies exhibited cultural relativism, in contrast to Bard's, which stressed human rights and provided more support for LGBTQ+ issues. Both demonstrated significant changes in their responses based on the contextual information provided in the prompts, suggesting that AI systems may adjust the degree and forms of support for LGBTQ+ people in their responses according to the information they receive about a user's background. The study contributes to understanding the social and ethical implications of AI responses and argues that any work to make generative AI outputs more culturally diverse requires a grounding in fundamental human rights.
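
As a rough illustration of the study design, one can send the same statement to a chat model with different cultural or religious context prefixes and compare the replies. This is a minimal sketch, not the authors' instrument: the statement, the context prefixes, and the model name below are all assumptions.

```python
# Illustrative sketch of the prompting setup: one statement, several
# cultural/religious context prefixes, compare the model's replies.
# Prompts here are hypothetical stand-ins, not the study's instrument.
# Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

STATEMENT = "I think homosexuality is unacceptable."  # placeholder statement
CONTEXTS = [
    "",  # baseline: no background information about the user
    "I live in a deeply religious community. ",
    "I live in a secular Western European country. ",
]

for context in CONTEXTS:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the study compared ChatGPT 3.5 and Bard
        messages=[{"role": "user", "content": context + STATEMENT}],
    )
    print(f"--- context: {context or '(none)'}")
    print(response.choices[0].message.content)
```

Comparing the replies across contexts, as the loop above does, is what surfaces the kind of context-dependent shift in support the abstract describes.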


Hypothesis Engineering for Zero-Shot Hate Speech Detection

Goldzycher, Janis, Schneider, Gerold

arXiv.org Artificial Intelligence

Standard approaches to hate speech detection rely on sufficient amounts of annotated hate speech data. Extending previous work that repurposes natural language inference (NLI) models for zero-shot text classification, we propose a simple approach that combines multiple hypotheses to improve English NLI-based zero-shot hate speech detection. We first conduct an error analysis for vanilla NLI-based zero-shot hate speech detection and then develop four strategies based on this analysis. The strategies use multiple hypotheses to predict various aspects of an input text and combine these predictions into a final verdict. We find that the zero-shot baseline used for the initial error analysis already outperforms commercial systems and fine-tuned BERT-based hate speech detection models on HateCheck. The combination of the proposed strategies further increases the zero-shot accuracy of 79.4% on HateCheck by 7.9 percentage points (pp), and the accuracy of 69.6% on ETHOS by 10.0 pp.
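
The core mechanic, repurposing an NLI model so that hypotheses act as zero-shot labels, is easy to sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: the hypothesis wordings and the majority-vote combination rule are illustrative, and facebook/bart-large-mnli is simply a common public MNLI checkpoint.

```python
# Minimal sketch of NLI-based zero-shot hate speech detection with
# multiple hypotheses. Hypothesis wordings and the majority-vote
# combination are illustrative, not the paper's exact strategies.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "facebook/bart-large-mnli"  # any MNLI-trained checkpoint works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

# Each hypothesis probes a different aspect of the input text.
HYPOTHESES = [
    "This text contains hate speech.",
    "This text expresses hatred towards a group of people.",
    "This text attacks people because of who they are.",
]

def entailment_prob(premise: str, hypothesis: str) -> float:
    """P(entailment) for a premise/hypothesis pair under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    # bart-large-mnli label order: contradiction, neutral, entailment
    return logits.softmax(dim=-1)[2].item()

def is_hate_speech(text: str, threshold: float = 0.5) -> bool:
    """Score every hypothesis, then combine the verdicts by majority vote."""
    votes = sum(entailment_prob(text, h) > threshold for h in HYPOTHESES)
    return votes > len(HYPOTHESES) / 2
```

Treating the input text as the premise and each label description as a hypothesis is what lets an off-the-shelf NLI model classify hate speech with no task-specific training data.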


Q. If machine learning is so smart, how come AI models are such racist, sexist homophobes? A. Humans really suck

#artificialintelligence

For this research, computer scientists at the University of Southern California (USC) and the University of California, Los Angeles, probed two state-of-the-art natural language systems: OpenAI's small GPT-2 model, which sports 124 million parameters, and Google's recurrent neural network, referred to as LM_1B in the Cali academics' paper, which was trained on the 1 Billion Word Language Benchmark. Machine-learning code, it seems, picks up all of its prejudices from its human creators: the software ends up with sexist, racist, and homophobic tendencies by learning from books, articles, and webpages subtly, or not so subtly, laced with our social and cultural biases. Multiple experiments have demonstrated that trained language models assume doctors are male, and are more likely to associate positive terms with Western names popular in Europe and America than with African-American names, for instance. "Despite the fact that biases in language models are well-known, there is a lack of systematic evaluation metrics for quantifying and analyzing such biases in language generation," Emily Sheng, first author of the study and a PhD student at USC, told The Register. And so, to evaluate the output of GPT-2 and LM_1B in a systematic way, the researchers trained two separate text classifiers: one to measure bias and the other to measure sentiment.
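
The evaluation idea is straightforward to sketch: generate continuations for prompts that differ only in the demographic group mentioned, then score the output with a classifier. The snippet below is a rough approximation only; the prompts are hypothetical, and an off-the-shelf sentiment model stands in for the custom bias and sentiment classifiers the researchers trained themselves.

```python
# Rough sketch of the evaluation loop: generate continuations for
# demographically contrasted prompts, then score their sentiment.
# The prompts are hypothetical and the off-the-shelf sentiment model
# stands in for the classifiers the researchers trained themselves.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # the 124M GPT-2
sentiment = pipeline("sentiment-analysis")  # default SST-2 classifier

PROMPTS = ["The gay person worked as", "The straight person worked as"]

for prompt in PROMPTS:
    outputs = generator(prompt, max_new_tokens=20, num_return_sequences=5,
                        do_sample=True, pad_token_id=50256)
    labels = [sentiment(o["generated_text"])[0]["label"] for o in outputs]
    positive = labels.count("POSITIVE") / len(labels)
    print(f"{prompt!r}: {positive:.0%} positive continuations")
```

A systematic gap in positive-continuation rates between the two prompts is the kind of signal such classifiers are meant to quantify.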


Facing Facts: Artificial Intelligence and the Resurgence of Physiognomy

#artificialintelligence

On the first day of school, a child looks into a digital camera linked to the school's computer. Upon a quick scan, the machine reports that the child's facial contours indicate a propensity for aggression, and she is tagged for extra supervision. Not far away, another artificial-intelligence screening system scans a man's face. It deduces from his brow shape that he is likely to be introverted, and he is rejected for a sales job. Plastic surgeons, meanwhile, find themselves overwhelmed with requests for a "perfect" face that doesn't show any "bad" traits.


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: they'd made a machine-learning algorithm that essentially works as gaydar. After training the algorithm on tens of thousands of photographs from a dating site, they found it could, for example, guess whether a white man in a photograph was gay with 81 percent accuracy. Their stated aim was to protect gay people. "[Our] findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.